A Tight Convex Upper Bound on the Likelihood of a Finite Mixture
The likelihood function of a finite mixture model is a non-convex function
with multiple local maxima and commonly used iterative algorithms such as EM
will converge to different solutions depending on initial conditions. In this
paper we ask: is it possible to assess how far we are from the global maximum
of the likelihood? Since the likelihood of a finite mixture model can grow
unboundedly by centering a Gaussian on a single datapoint and shrinking the
covariance, we constrain the problem by assuming that the parameters of the
individual models are members of a large discrete set (e.g. estimating a
mixture of two Gaussians where the means and variances of both Gaussians are
members of a set of a million possible means and variances). For this setting
we show that a simple upper bound on the likelihood can be computed using
convex optimization and we analyze conditions under which the bound is
guaranteed to be tight. This bound can then be used to assess the quality of
solutions found by EM (where the final result is projected on the discrete set)
or any other mixture estimation algorithm. For any dataset our method allows us
to find a finite mixture model together with a dataset-specific bound on how
far the likelihood of this mixture is from the global optimum of the likelihood.
Comment: icpr 201
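The bound can be sketched concretely: for fixed candidate components, the mixture log-likelihood is concave in the mixture weights, so maximizing it over arbitrary weights on the full discrete set is a convex problem, and its optimum upper-bounds any K-component mixture whose parameters come from that set (such a mixture is just a sparse weight vector). The following is a minimal numpy sketch of this idea, not the paper's exact algorithm; the 1-D data, the candidate grid, and the weight-only EM iteration (which solves the concave weight problem globally) are illustrative assumptions.

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    """Evaluate N(x; mu, var) for 1-D data."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mixture_upper_bound(x, mus, variances, n_iter=500):
    """Upper-bound the likelihood of any K-component mixture whose
    parameters lie in the candidate set, by maximizing the log-likelihood
    over mixture weights on ALL candidates at once.

    For fixed components the log-likelihood is concave in the weights,
    so the weight-only EM update below converges to the global optimum.
    """
    # F[i, k] = density of datapoint i under candidate component k
    F = gaussian_pdf(x[:, None], mus[None, :], variances[None, :])
    n, K = F.shape
    pi = np.full(K, 1.0 / K)           # start from uniform weights
    for _ in range(n_iter):
        resp = F * pi                   # unnormalized responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        pi = resp.mean(axis=0)          # EM update for the weights only
    return np.sum(np.log(F @ pi))       # upper bound on the log-likelihood

# Hypothetical example: a grid of candidate means and variances
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(3, 0.5, 100)])
M, V = np.meshgrid(np.linspace(-5, 5, 50), [0.25, 0.5, 1.0, 2.0])
bound = mixture_upper_bound(x, M.ravel(), V.ravel())
print(f"upper bound on the mixture log-likelihood: {bound:.2f}")
```

The log-likelihood of any projected EM solution over the same candidate set can then be compared directly against this value to get a dataset-specific optimality gap.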
Belief Propagation and Revision in Networks with Loops
Local belief propagation rules of the sort proposed by Pearl (1988) are guaranteed to converge to the optimal beliefs for singly connected networks. Recently, a number of researchers have empirically demonstrated good performance of these same algorithms on networks with loops, but a theoretical understanding of this performance has yet to be achieved. Here we lay the foundation for an understanding of belief propagation in networks with loops. For networks with a single loop, we derive an analytical relationship between the steady-state beliefs in the loopy network and the true posterior probability. Using this relationship we show a category of networks for which the MAP estimate obtained by belief update and by belief revision can be proven to be optimal (although the beliefs will be incorrect). We show how nodes can use local information in the messages they receive in order to correct the steady-state beliefs. Furthermore, we prove that for all networks with a single loop, the MAP estimate obtained by belief revision at convergence is guaranteed to give the globally optimal sequence of states. The result is independent of the length of the cycle and the size of the state space. For networks with multiple loops, we introduce the concept of a "balanced network" and show simulation results.
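As an illustration of the setting studied here, the sketch below runs synchronous sum-product belief update around a single-loop pairwise network with binary nodes and compares the steady-state beliefs to the exact posterior marginals obtained by enumeration. The 4-node cycle and the random potentials are hypothetical choices for demonstration, not taken from the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 4                                   # nodes 0..n-1 arranged in a single loop
# psi[i] couples node i and node (i+1) % n; phi[i] is the unary potential of node i
psi = [rng.uniform(0.5, 2.0, (2, 2)) for _ in range(n)]
phi = [rng.uniform(0.5, 2.0, 2) for _ in range(n)]

def exact_marginals():
    """Brute-force posterior marginals by enumerating all 2**n joint states."""
    p = np.zeros((n, 2))
    for states in itertools.product([0, 1], repeat=n):
        w = 1.0
        for i in range(n):
            w *= phi[i][states[i]] * psi[i][states[i], states[(i + 1) % n]]
        for i in range(n):
            p[i, states[i]] += w
    return p / p.sum(axis=1, keepdims=True)

def loopy_bp(n_iter=50):
    """Synchronous sum-product ('belief update') around the loop."""
    # m_fwd[i]: message node i -> node i+1; m_bwd[i]: message node i+1 -> node i
    m_fwd = np.ones((n, 2))
    m_bwd = np.ones((n, 2))
    for _ in range(n_iter):
        new_fwd = np.empty_like(m_fwd)
        new_bwd = np.empty_like(m_bwd)
        for i in range(n):
            # message from i to i+1 gathers the message arriving at i from i-1
            out = (phi[i] * m_fwd[(i - 1) % n]) @ psi[i]
            new_fwd[i] = out / out.sum()
            # message from i+1 to i gathers the message arriving at i+1 from i+2
            out = psi[i] @ (phi[(i + 1) % n] * m_bwd[(i + 1) % n])
            new_bwd[i] = out / out.sum()
        m_fwd, m_bwd = new_fwd, new_bwd
    beliefs = np.array([phi[i] * m_fwd[(i - 1) % n] * m_bwd[i] for i in range(n)])
    return beliefs / beliefs.sum(axis=1, keepdims=True)

print("exact marginals:\n", exact_marginals())
print("loopy BP beliefs:\n", loopy_bp())
```

On a single loop the messages converge, and the printed beliefs generally differ from the exact marginals, which is the discrepancy the paper's analytical relationship characterizes.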
- …